Graph Few-shot Learning with Task-specific Structures

Neural Information Processing Systems

Graph few-shot learning is of great importance among various graph learning tasks. Under the few-shot scenario, models are often required to conduct classification given limited labeled samples. Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs) and perform classification across a series of meta-tasks. Nevertheless, these methods generally rely on the original graph (i.e., the graph that each meta-task is sampled from) to learn node representations.
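The meta-task setup the abstract describes can be made concrete with a small sketch of episodic N-way K-shot sampling over node labels. This is a generic illustration, not code from the paper; the `sample_meta_task` function and its parameters are illustrative names.

```python
import random

def sample_meta_task(labels, n_way=2, k_shot=1, q_query=1, seed=0):
    """Sample one N-way K-shot meta-task from node labels.

    labels: dict mapping node id -> class label.
    Returns (support, query): lists of (node, class) pairs.
    """
    rng = random.Random(seed)
    by_class = {}
    for node, c in labels.items():
        by_class.setdefault(c, []).append(node)
    # Keep only classes with enough nodes for both support and query sets.
    eligible = [c for c, ns in by_class.items() if len(ns) >= k_shot + q_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for c in classes:
        nodes = rng.sample(by_class[c], k_shot + q_query)
        support += [(n, c) for n in nodes[:k_shot]]
        query += [(n, c) for n in nodes[k_shot:]]
    return support, query

# Toy label assignment: class 'c' is too small to be sampled.
labels = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b', 6: 'c'}
support, query = sample_meta_task(labels, n_way=2, k_shot=1, q_query=1)
```

A GNN-based few-shot learner would then embed the support and query nodes and classify each query node against the support set, episode by episode.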


Supplement to Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding 1 Additional Algorithm Details 1.1 Details of the Transformation Function

Neural Information Processing Systems

The support nodes are either positive or negative. For the transformation function, we stack multiple computation blocks as shown in Figure 1. The stacking mechanism helps the function capture comprehensive relationships between nodes, thereby boosting performance. Each computation block contains two main modules. The detailed architecture of the self-attention module is illustrated in Figure 1.
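The stacked computation blocks built around self-attention can be sketched as follows. This is a minimal NumPy illustration of stacking residual self-attention blocks over node embeddings, under the assumption of single-head scaled dot-product attention; it is not the paper's exact architecture, and all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_block(X, Wq, Wk, Wv):
    """One computation block: scaled dot-product self-attention over
    the node embeddings X, followed by a residual connection."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    return X + scores @ V

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))  # embeddings for 5 support/query nodes
# Stack of 3 blocks, each with its own (Wq, Wk, Wv) parameters.
blocks = [tuple(rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
          for _ in range(3)]
H = X
for Wq, Wk, Wv in blocks:
    H = self_attention_block(H, Wq, Wk, Wv)
```

Stacking lets later blocks attend over representations already mixed by earlier blocks, which is how such architectures capture higher-order relationships between support and query nodes.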







PLACE: Prompt Learning for Attributed Community Search

Shuheng Fang, Kangfei Zhao, Rener Zhang, Yu Rong, Jeffrey Xu Yu

arXiv.org Artificial Intelligence

In this paper, we propose PLACE (Prompt Learning for Attributed Community Search), an innovative graph prompt learning framework for ACS. Enlightened by prompt-tuning in Natural Language Processing (NLP), where learnable prompt tokens are inserted to contextualize NLP queries, PLACE integrates structural and learnable prompt tokens into the graph as a query-dependent refinement mechanism, forming a prompt-augmented graph. Within this prompt-augmented graph structure, the learned prompt tokens serve as a bridge that strengthens connections between graph nodes for the query, enabling the GNN to more effectively identify patterns of structural cohesiveness and attribute similarity related to the specific query. We employ an alternating training paradigm to optimize the prompt parameters and the GNN jointly. Moreover, we design a divide-and-conquer strategy to enhance scalability, allowing the model to handle million-scale graphs. Extensive experiments on 9 real-world graphs demonstrate the effectiveness of PLACE for three types of ACS queries, on which PLACE achieves F1 scores 22% higher on average than state-of-the-art methods.
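The structural side of the prompt-augmented graph, inserting prompt-token nodes wired to the query node, can be sketched as below. This is a structure-only illustration under assumed names; in PLACE the token embeddings would additionally be learnable parameters trained jointly with the GNN.

```python
def augment_with_prompts(adj, query_node, num_prompts=2):
    """Insert prompt-token nodes and wire them to the query node,
    forming a prompt-augmented graph (structure only).

    adj: dict mapping node id -> set of neighbor ids.
    Returns (augmented adjacency, list of new prompt node ids).
    """
    aug = {u: set(vs) for u, vs in adj.items()}
    next_id = max(aug) + 1
    prompt_ids = []
    for i in range(num_prompts):
        p = next_id + i
        prompt_ids.append(p)
        aug[p] = {query_node}      # prompt token attaches to the query
        aug[query_node].add(p)
    return aug, prompt_ids

# Toy path graph 0-1-2 with query node 1.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
aug, prompts = augment_with_prompts(adj, query_node=1, num_prompts=2)
```

Message passing over the augmented graph then lets the query node aggregate from the prompt tokens, which is the "bridge" role the abstract describes.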


Pre-trained Prompt-driven Semi-supervised Local Community Detection

Li Ni, Hengkai Xu, Lin Mu, Yiwen Zhang, Wenjian Luo

arXiv.org Artificial Intelligence

Semi-supervised local community detection aims to leverage known communities to detect the community containing a given node. Although existing semi-supervised local community detection studies yield promising results, they are time-consuming, highlighting the need for more efficient algorithms. Therefore, we apply the "pre-train, prompt" paradigm to semi-supervised local community detection and propose the Pre-trained Prompt-driven Semi-supervised Local community detection method (PPSL). PPSL consists of three main components: node encoding, sample generation, and prompt-driven fine-tuning. Specifically, the node encoding component employs graph neural networks to learn the representations of nodes and communities. Based on these representations, the sample generation component selects known communities that are structurally similar to the local structure of the given node as training samples. Finally, the prompt-driven fine-tuning component leverages these training samples as prompts to guide the final community prediction. Experimental results on five real-world datasets demonstrate that PPSL outperforms baselines in both community quality and efficiency.
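The sample-generation step, picking known communities structurally similar to the given node's local neighborhood, can be illustrated with simple structural statistics. This is a hand-rolled stand-in (ego-net size and internal average degree) rather than PPSL's learned GNN representations; all function names are illustrative.

```python
def local_stats(adj, node):
    """Size and average internal degree of the node's one-hop ego network."""
    ego = {node} | adj[node]
    avg_deg = sum(len(adj[v] & ego) for v in ego) / len(ego)
    return len(ego), avg_deg

def select_prompt_communities(adj, node, communities, top_k=2):
    """Pick the known communities whose (size, avg internal degree)
    is closest to the given node's local structure -- a crude stand-in
    for a learned similarity between community representations."""
    size, deg = local_stats(adj, node)

    def dist(c):
        ego = set(c)
        c_deg = sum(len(adj[v] & ego) for v in ego) / len(ego)
        return abs(len(ego) - size) + abs(c_deg - deg)

    return sorted(communities, key=dist)[:top_k]

# Toy graph: a triangle {0,1,2} loosely attached to a path 3-4-5.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
known = [[0, 1, 2], [3, 4, 5]]
picked = select_prompt_communities(adj, node=0, communities=known, top_k=1)
```

The selected communities would then serve as prompts during fine-tuning to steer the prediction for the given node.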


Community Search in Time-dependent Road-social Attributed Networks

Li Ni, Hengkai Xu, Lin Mu, Yiwen Zhang, Wenjian Luo

arXiv.org Artificial Intelligence

Real-world networks often involve both keywords and locations, along with travel time variations between locations due to traffic conditions. However, most existing cohesive subgraph-based community search studies utilize a single attribute, either keywords or locations, to identify communities. They do not simultaneously consider both keywords and locations, which results in low semantic or spatial cohesiveness of the detected communities, and they fail to account for variations in travel time. Additionally, these studies traverse the entire network to build efficient indexes, but the detected community only involves nodes around the query node, leading to the traversal of nodes that are not relevant to the community. Therefore, we propose the problem of discovering semantic-spatial aware k-core, which refers to a k-core with high semantic and time-dependent spatial cohesiveness containing the query node. To address this problem, we propose an exact and a greedy algorithm, both of which gradually expand outward from the query node. They are local methods that only access the local part of the attributed network near the query node rather than the entire network. Moreover, we design a method to calculate the semantic similarity between two keywords using large language models. This method alleviates the disadvantages of keyword-matching methods used in existing community search studies, such as mismatches caused by differently expressed synonyms and the presence of irrelevant words. Experimental results show that the greedy algorithm outperforms baselines in terms of structural, semantic, and time-dependent spatial cohesiveness.
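The local expansion idea, growing a candidate set outward from the query node and enforcing k-core cohesiveness, can be sketched in structure-only form. This simplified greedy sketch ignores the paper's keyword and time-dependent travel-time scores and uses illustrative names throughout.

```python
def greedy_local_kcore(adj, query, k=2, max_size=10):
    """Grow a candidate community outward from the query node,
    greedily adding the frontier node with the most links into the
    current set, then prune to a k-core of the induced subgraph.

    adj: dict mapping node id -> set of neighbor ids.
    Only nodes near the query are touched, never the whole graph.
    """
    comm = {query}
    while len(comm) < max_size:
        frontier = {v for u in comm for v in adj[u]} - comm
        if not frontier:
            break
        comm.add(max(frontier, key=lambda v: len(adj[v] & comm)))
    # Prune: iteratively drop nodes with fewer than k neighbors inside.
    changed = True
    while changed:
        changed = False
        for v in list(comm):
            if v != query and len(adj[v] & comm) < k:
                comm.remove(v)
                changed = True
    return comm

# Dense cluster {0,1,2,3} around query 0, with a dangling node 4.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {3}}
result = greedy_local_kcore(adj, query=0, k=2, max_size=4)
```

A full solution would additionally score candidates by keyword similarity and time-dependent travel cost when deciding which frontier node to absorb, as the paper's exact and greedy algorithms do.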